STATS 300A: Theory of Statistics, Fall 2015, Lecture 11 — October 27
Abstract
This lemma allows us to find a minimax estimator for a particular tractable submodel and then show that the worst-case risk over the full model equals the worst-case risk over the submodel (that is, the worst-case risk does not increase when we pass to the full model). In that case, the Lemma lets us argue that the estimator we found is also minimax for the full model. This parallels how we justified the minimaxity of the estimator of a Normal mean with bounded variance in the last lecture. Here is a fairly simple example:
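In symbols, one standard formulation of this submodel argument reads as follows (a reconstruction from the description above; the submodel notation Θ0 ⊂ Θ is ours, not quoted from the lecture):

\[
\text{If } \delta \text{ is minimax over } \Theta_0 \subset \Theta
\quad\text{and}\quad
\sup_{\theta \in \Theta} R(\theta, \delta) = \sup_{\theta \in \Theta_0} R(\theta, \delta),
\quad\text{then } \delta \text{ is minimax over } \Theta.
\]

The proof is one line: for any competitor \( \delta' \),
\( \sup_{\theta \in \Theta} R(\theta, \delta') \ge \sup_{\theta \in \Theta_0} R(\theta, \delta') \ge \sup_{\theta \in \Theta_0} R(\theta, \delta) = \sup_{\theta \in \Theta} R(\theta, \delta) \),
where the middle inequality uses minimaxity of \( \delta \) over the submodel.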
Similar resources
STATS 300A: Theory of Statistics, Fall 2015, Lecture 10 — October 22
X1, . . . , Xn iid ∼ N(θ, σ²), with σ² known. Our goal is to estimate θ under squared-error loss. For our first guess, pick the natural estimator X̄. Note that it has constant risk σ²/n, which suggests minimaxity, because we know that Bayes estimators with constant risk are also minimax estimators. However, X̄ is not Bayes for any prior, because under squared-error loss unbiased estimators are Bayes only when their Bayes risk is zero...
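For concreteness, the constant-risk claim in this snippet comes from a routine computation (supplied here, not part of the excerpt): since X̄ is unbiased for θ,

\[
R(\theta, \bar{X})
= \mathbb{E}_{\theta}\bigl[(\bar{X} - \theta)^2\bigr]
= \operatorname{Var}_{\theta}(\bar{X})
= \frac{\sigma^2}{n},
\]

which does not depend on θ, so the risk is constant over the entire parameter space.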
STATS 300A: Theory of Statistics, Fall 2015, Lecture 3 —
Before discussing today's topic, let's take a step back and situate ourselves with respect to the big picture. As mentioned in Lecture 1, a primary focus of this course is optimal inference. As a first step toward reasoning about optimality, we began to examine which statistics of the data we observe are actually relevant to a given inferential task. We learned about lossless data reduction...
STATS 300B: Theory of Statistics, Winter 2018, Lecture 15 – February 27
Theorem 1. Let {Xn}n≥1 ⊂ L∞(T) be a sequence of stochastic processes indexed by T. The following are equivalent: (1) Xn converges in distribution to a tight stochastic process X ∈ L∞(T); (2) both of the following hold: (a) finite-dimensional convergence (FIDI): for every k ∈ ℕ and t1, . . . , tk ∈ T, (Xn(t1), . . . , Xn(tk)) converges in distribution as n → ∞; (b) the sequence {Xn} is asymptotically stochastically equicontinuous...
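For reference, condition (b) is usually formalized as follows (a standard statement from empirical-process theory, assuming a semimetric ρ on T; this display is supplied here and is not quoted from the snippet):

\[
\forall \varepsilon > 0: \quad
\lim_{\delta \downarrow 0} \, \limsup_{n \to \infty} \,
\Pr^{*}\!\Bigl( \sup_{\rho(s,t) < \delta} \bigl| X_n(s) - X_n(t) \bigr| > \varepsilon \Bigr) = 0,
\]

where Pr* denotes outer probability, needed because the supremum over an uncountable index set need not be measurable.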